Results 1 - 4 of 4
1.
PLoS One ; 19(4): e0300710, 2024.
Article in English | MEDLINE | ID: mdl-38598482

ABSTRACT

How do author perceptions match up to the outcomes of the peer-review process and the perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception of their own papers after seeing the reviews. The salient results are: (1) Authors overestimated the acceptance probability of their papers roughly three-fold: the median prediction was 70% against an approximately 25% acceptance rate. (2) Female authors exhibited marginally higher (statistically significant) miscalibration than male authors; the predictions of authors invited to serve as meta-reviewers or reviewers were similarly calibrated to one another, but better calibrated than those of authors who were not invited to review. (3) Authors' relative ranking of the scientific contribution of two of their own submissions generally agreed with their predicted acceptance probabilities (93% agreement), but in a notable 7% of responses authors predicted a worse outcome for their better paper. (4) The author-provided rankings disagreed with the peer-review decisions about a third of the time; when co-authors ranked their jointly authored papers, they disagreed at a similar rate, about a third of the time. (5) At least 30% of respondents with accepted papers and of respondents with rejected papers said that their perception of their own paper improved after the review process. Stakeholders in peer review should take these findings into account when setting their expectations of the process.
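As a rough illustration of the miscalibration reported in finding (1), the following minimal Python sketch compares the median predicted acceptance probability against the realized acceptance rate. The data is hypothetical; this is not the study's code or dataset.

```python
# A minimal sketch (hypothetical data, not the study's code) of comparing
# authors' median predicted acceptance probability with the realized rate.
import statistics

# Hypothetical per-paper predictions (probabilities in [0, 1]) and outcomes.
predictions = [0.9, 0.7, 0.6, 0.8, 0.5, 0.7, 0.75]
accepted = [1, 0, 0, 1, 0, 0, 0]

median_prediction = statistics.median(predictions)  # authors' typical forecast
acceptance_rate = sum(accepted) / len(accepted)     # realized base rate

# The paper reports a median prediction of ~70% against a ~25% acceptance
# rate, i.e. roughly a three-fold overestimate.
print(f"median prediction:   {median_prediction:.0%}")
print(f"acceptance rate:     {acceptance_rate:.0%}")
print(f"overestimate factor: {median_prediction / acceptance_rate:.1f}x")
```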


Subjects
Peer Review, Research; Peer Review; Male; Female; Humans; Surveys and Questionnaires
2.
PLoS One ; 18(7): e0287443, 2023.
Article in English | MEDLINE | ID: mdl-37437010

ABSTRACT

Peer review is the backbone of academia, and humans constitute a cornerstone of this process: they are responsible for reviewing submissions and making the final acceptance/rejection decisions. Given that human decision-making is known to be susceptible to various cognitive biases, it is important to understand which (if any) biases are present in the peer-review process, and to design the pipeline so that the impact of these biases is minimized. In this work, we focus on the dynamics of discussions between reviewers and investigate the presence of herding behaviour therein. Specifically, we aim to understand whether reviewers and discussion chairs are disproportionately influenced by the first argument presented in the discussion when (in the case of reviewers) they have formed an independent opinion about the paper before discussing it with others. In conjunction with the review process of a large, top-tier machine learning conference, we design and execute a randomized controlled trial involving 1,544 papers and 2,797 reviewers, with the goal of testing for a conditional causal effect of the discussion initiator's opinion on the outcome of a paper. Our experiment reveals no evidence of herding in peer-review discussions. This observation contrasts with past work that has documented an undue influence of the first piece of information on the final decision (e.g., the anchoring effect) and analyzed herding behaviour in other settings (e.g., financial markets). Regarding policy implications, the absence of a herding effect suggests that the current lack of a unified policy on discussion initiation does not increase the arbitrariness of the resulting decisions.
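In spirit, a causal test of this kind can be sketched as a permutation test on randomized initiator assignments: since the initiator is assigned at random, shuffling assignments yields a valid null distribution. The data and the choice of a permutation test below are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# A minimal sketch (assumed analysis, not the authors' pipeline) of testing
# whether the discussion initiator's opinion predicts the final decision.
import random

# Per paper: 1 if the randomly assigned initiator was positive about the
# paper, and 1 if the paper was ultimately accepted (hypothetical data).
initiator_positive = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
accepted = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]

def mean_diff(init, acc):
    """Acceptance-rate difference: positive vs. negative initiator."""
    pos = [a for i, a in zip(init, acc) if i == 1]
    neg = [a for i, a in zip(init, acc) if i == 0]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

observed = mean_diff(initiator_positive, accepted)

# Permutation null: reshuffle the initiator assignments, which the
# randomized design makes valid.
random.seed(0)
null = []
for _ in range(10_000):
    shuffled = random.sample(initiator_positive, len(initiator_positive))
    null.append(mean_diff(shuffled, accepted))

p_value = sum(abs(d) >= abs(observed) for d in null) / len(null)
print(f"observed effect: {observed:+.2f}, permutation p-value: {p_value:.3f}")
```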


Subjects
Peer Review; Social Conformity; Humans
3.
PLoS One ; 18(7): e0283980, 2023.
Article in English | MEDLINE | ID: mdl-37418377

ABSTRACT

Citations play an important role in researchers' careers as a key factor in the evaluation of scientific impact. Anecdotal advice suggests that authors exploit this fact and cite prospective reviewers in an attempt to obtain a more positive evaluation of their submission. In this work, we investigate whether such a citation bias actually exists: does citing a reviewer's own work in a submission cause them to be positively biased towards the submission? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we execute an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns about model mismatch. Overall, our analysis involves 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of effect size, by citing a reviewer's work, a submission has a non-trivial chance of receiving a higher score from that reviewer: the expected increase in score is approximately 0.23 on a 5-point Likert item. For reference, a one-point increase in score by a single reviewer improves the position of a submission by 11% on average.
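One simple way to control for paper quality when estimating such a bias is a within-paper comparison: for papers reviewed by both a cited and a non-cited reviewer, difference the scores within the same paper. The sketch below illustrates that idea with hypothetical data; it is an assumed simplification, not the modeling used in the study.

```python
# A minimal sketch (assumed approach, hypothetical data) of a within-paper
# comparison between reviewers cited by the submission and those not cited,
# which differences out paper quality.
from collections import defaultdict

# (paper_id, reviewer_cited_by_paper, score on a 5-point Likert item)
reviews = [
    ("p1", True, 4), ("p1", False, 3),
    ("p2", True, 3), ("p2", False, 3),
    ("p3", True, 5), ("p3", False, 4),
]

by_paper = defaultdict(lambda: {"cited": [], "uncited": []})
for paper, cited, score in reviews:
    by_paper[paper]["cited" if cited else "uncited"].append(score)

# Within-paper score gap: cited-reviewer mean minus uncited-reviewer mean.
gaps = []
for groups in by_paper.values():
    if groups["cited"] and groups["uncited"]:
        gaps.append(sum(groups["cited"]) / len(groups["cited"])
                    - sum(groups["uncited"]) / len(groups["uncited"]))

# The paper reports an expected increase of ~0.23 on a 5-point scale.
print(f"mean within-paper gap: {sum(gaps) / len(gaps):+.2f}")
```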


Subjects
Peer Review; Research Personnel; Humans; Prospective Studies; Bias; Machine Learning; Peer Review, Research
4.
PLoS One ; 18(6): e0286206, 2023.
Article in English | MEDLINE | ID: mdl-37342992

ABSTRACT

There is widespread debate on whether to anonymize author identities in peer review. The key argument for anonymization is that it mitigates bias, whereas arguments against anonymization posit various uses of author identities in the review process. The Innovations in Theoretical Computer Science (ITCS) 2023 conference adopted a middle ground: author identities were initially anonymized from reviewers, revealed after a reviewer had submitted their initial review, and the reviewer was then allowed to change their review. We present an analysis of the reviews pertaining to the identification and use of author identities. Our key findings are: (I) A majority of reviewers self-report not knowing, and being unable to guess, the authors' identities for the papers they were reviewing. (II) After the initial submission of reviews, the overall merit score changed in 7.1% of reviews and the self-reported reviewer expertise changed in 3.8%. (III) There is a very weak and statistically insignificant correlation between the rank of authors' affiliations and the change in overall merit, and a weak but statistically significant correlation with the change in reviewer expertise. We also conducted an anonymous survey to obtain opinions from reviewers and authors. The main findings from the 200 survey responses are: (i) A vast majority of participants favor anonymizing author identities in some form. (ii) The "middle-ground" initiative of ITCS 2023 was appreciated. (iii) Detecting conflicts of interest is a challenge that needs to be addressed if author identities are anonymized. Overall, these findings support anonymizing author identities in some form (e.g., as was done at ITCS 2023), as long as there is a robust and efficient way to check conflicts of interest.
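The correlation analysis in finding (III) could be sketched as follows. The affiliation ranks, the score changes, and the choice of a plain Pearson coefficient are all illustrative assumptions rather than the study's exact method or data.

```python
# A minimal sketch (assumed analysis, hypothetical data) of correlating
# authors' affiliation rank with the post-reveal change in a review's
# overall merit score.
import statistics

affiliation_rank = [1, 5, 12, 3, 40, 7, 25, 2]  # lower = higher-ranked
merit_change = [0, 0, -1, 1, 0, 0, 0, 0]        # score after reveal minus before

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The paper finds this correlation very weak and statistically insignificant
# for merit changes (and weak but significant for expertise changes).
print(f"r = {pearson(affiliation_rank, merit_change):+.2f}")
```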


Subjects
Attitude; Peer Review; Humans; Surveys and Questionnaires; Self Report; Bias; Peer Review, Research